
    Determinants of Dwell Time in Visual Search: Similarity or Perceptual Difficulty?

    The present study examined the factors that determine dwell times in a visual search task, that is, the duration for which the gaze remains fixated on an object. It has been suggested that an item’s similarity to the search target should be an important determinant of dwell times, because dwell times are taken to reflect the time needed to reject the item as a distractor, and such discriminations are supposed to be harder the more similar an item is to the search target. In line with this similarity view, a previous study showed that, in search for a target ring of thin line-width, dwell times on thin line-width Landolt C distractors were longer than dwell times on Landolt Cs with thick or medium line-width. However, dwell times may have been longer on thin Landolt Cs because the thin line-width made it harder to detect whether the stimuli had a gap or not. Thus, it is an open question whether dwell times on thin line-width distractors were longer because they were similar to the target or because the perceptual decision was more difficult. The present study de-coupled similarity from perceptual difficulty by measuring dwell times on thin, medium and thick line-width distractors when the target had thin, medium or thick line-width. The results showed that dwell times were longer on target-similar than target-dissimilar stimuli across all target conditions and regardless of line-width. It is concluded that prior findings of longer dwell times on thin line-width distractors can clearly be attributed to target similarity. As will be discussed towards the end, the finding of similarity effects on dwell times has important implications for current theories of visual search and eye movement control.

    Object Detection Through Exploration With A Foveated Visual Field

    We present a foveated object detector (FOD) as a biologically-inspired alternative to the sliding window (SW) approach, which is the dominant method of search in computer vision object detection. Similar to the human visual system, the FOD has higher resolution at the fovea and lower resolution at the visual periphery. Consequently, more computational resources are allocated at the fovea and relatively fewer at the periphery. The FOD processes the entire scene, uses retino-specific object detection classifiers to guide eye movements, aligns its fovea with regions of interest in the input image and integrates observations across multiple fixations. Our approach combines modern object detectors from computer vision with a recent model of peripheral pooling regions found at the V1 layer of the human visual system. We assessed various eye movement strategies on the PASCAL VOC 2007 dataset and show that the FOD performs on par with the SW detector while bringing significant computational cost savings. Comment: An extended version of this manuscript was published in PLOS Computational Biology (October 2017) at https://doi.org/10.1371/journal.pcbi.100574
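    The eccentricity-dependent pooling described above can be illustrated with a minimal sketch, assuming a grayscale image and a single fixation point. This is not the FOD's actual pooling model (which uses V1-style peripheral pooling regions); the function name and the `slope` parameter are illustrative only. The idea shown is simply that the averaging window, and hence the loss of resolution, grows with distance from fixation.

    ```python
    import numpy as np

    def foveated_pool(image, fixation, slope=0.25):
        """Average each pixel over a square window whose radius grows
        linearly with eccentricity (distance from the fixation point),
        mimicking the coarser pooling of peripheral vision."""
        h, w = image.shape
        fy, fx = fixation
        out = np.empty((h, w), dtype=float)
        for y in range(h):
            for x in range(w):
                ecc = np.hypot(y - fy, x - fx)
                r = int(slope * ecc)  # pooling radius: 0 at fixation, larger in periphery
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                out[y, x] = image[y0:y1, x0:x1].mean()
        return out
    ```

    At the fixation point the radius is zero, so the pixel is returned unchanged (full resolution); far from fixation the output is a local average (coarse resolution), which is the resource asymmetry the abstract describes.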

    Exogenous spatial precuing reliably modulates object processing but not object substitution masking

    Object substitution masking (OSM) is used in behavioral and imaging studies to investigate processes associated with the formation of a conscious percept. Reportedly, OSM occurs only when visual attention is diffusely spread over a search display or focused away from the target location. Indeed, the presumed role of spatial attention is central to theoretical accounts of OSM and of visual processing more generally (Di Lollo, Enns, & Rensink, Journal of Experimental Psychology: General 129:481–507, 2000). We report a series of five experiments in which valid spatial precuing is shown to enhance the ability of participants to accurately report a target but, in most cases, without affecting OSM. In only one experiment (Experiment 5) was a significant effect of precuing observed on masking. This is in contrast to the reliable effect shown across all five experiments in which precuing improved overall performance. The results converge with recent findings from Argyropoulos, Gellatly, and Pilling (Journal of Experimental Psychology: Human Perception and Performance 39:646–661, 2013), which show that OSM is independent of the number of distractor items in a display. Our results demonstrate that OSM can operate independently of focal attention. Previous claims of a strong interrelationship between OSM and spatial attention are likely to have arisen from ceiling or floor artifacts that restricted measurable performance.

    Does oculomotor inhibition of return influence fixation probability during scene search?

    Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This “foraging” hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1° of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search.

    EEG Correlates of Attentional Load during Multiple Object Tracking

    While human subjects tracked a subset of ten identical, randomly-moving objects, event-related potentials (ERPs) were evoked at parieto-occipital sites by task-irrelevant flashes that were superimposed on either tracked (Target) or non-tracked (Distractor) objects. With ERPs as markers of attention, we investigated how the allocation of attention varied with tracking load, that is, with the number of objects that were tracked. Flashes on Target discs elicited stronger ERPs than did flashes on Distractor discs; ERP amplitude (0–250 ms) decreased monotonically as load increased from two to three to four (of ten) discs. Amplitude decreased more rapidly for Target discs than for Distractor discs. As a result, with increasing tracking loads, the difference between ERPs to Targets and Distractors diminished. This change in ERP amplitudes with load accords well with behavioral performance, suggesting that successful tracking depends upon the relationship between the neural signals associated with attended and non-attended objects.

    Oculomotor Evidence for Top-Down Control following the Initial Saccade

    The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could be the most, medium, or least salient element in the display. Results were analyzed as a function of response time, separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven, whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements as response times increased. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter.

    The influence of early aging on eye movements during motor simulation

    Movement-based interventions such as imagery and action observation are increasingly used to support the physical rehabilitation of adults during early aging. The efficacy of these more covert approaches rests on an intuitively appealing assumption that movement execution, imagery and observation share neural substrate, so that altering one directly influences the function of the other two. Using eye movement metrics, this paper reports findings that question the congruency of the three conditions. The data reveal that simulating movement through imagery and action observation may offer older adults movement practice conditions that are not constrained by the age-related decline observed in physical conditions. In addition, the findings support action observation as a more effective technique than imagery for movement reproduction. This concern about imagery was also seen in the less congruent temporal relationship in movement time between imagery and movement execution, suggesting imagery inaccuracy in early aging.

    Keeping an eye on noisy movements: On different approaches to perceptual-motor skill research and training

    Contemporary theorising on the complementary nature of perception and action in expert performance has led to the emergence of different emphases in studying movement coordination and gaze behaviour. On the one hand, coordination research has examined the role that variability plays in movement control, evidencing that variability facilitates individualised adaptations during both learning and performance. On the other hand, and at odds with this principle, the majority of gaze behaviour studies have tended to average data over participants and trials, proposing the importance of universal 'optimal' gaze patterns in a given task, for all performers, irrespective of stage of learning. In this article, new lines of inquiry are considered with the aim of reconciling these two distinct approaches. The role that inter- and intra-individual variability may play in gaze behaviours is considered, before suggesting directions for future research.

    Scenes, saliency maps and scanpaths

    The aim of this chapter is to review some of the key research investigating how people look at pictures. In particular, my goal is to provide theoretical background for those who are new to the field, while also explaining some of the relevant methods and analyses. I begin by introducing eye movements in the context of natural scene perception. As in other complex tasks, eye movements provide a measure of attention and information processing over time, and they tell us how the foveated visual system determines what to prioritise. I then describe some of the many measures that have been derived to summarise where people look in complex images. These include global measures, analyses based on regions of interest, and comparisons based on heat maps.

    A particularly popular approach for trying to explain fixation locations is the saliency map approach, and the first half of the chapter is mostly devoted to this topic. A large number of papers and models are built on this approach, but it is also worth spending time on because the methods involved have been used across a wide range of applications. The saliency map approach is based on the fact that the visual system has topographic maps of visual features, that contrast within these features seems to be represented and prioritised, and that a central representation can be used to control attention and eye movements. This approach, and the underlying principles, have led to an increase in the number of researchers using complex natural scenes as stimuli. It is therefore important that those new to the field are familiar with saliency maps, their usage, and their pitfalls. I describe the original implementation of this approach (Itti & Koch, 2000), which uses spatial filtering at different levels of coarseness and combines the results in an attempt to identify the regions which stand out from their background. Evaluating this model requires comparing fixation locations to model predictions. Several different experimental and comparison methods have been used, but most recent research shows that bottom-up guidance is rather limited in terms of predicting real eye movements.

    The second part of the chapter is largely concerned with measuring eye movement scanpaths. Scanpaths are the sequential patterns of fixations and saccades made when looking at something for a period of time. They show regularities which may reflect top-down attention, and some have attempted to link these to memory and an individual’s mental model of what they are looking at. While not all researchers will be testing hypotheses about scanpaths, an understanding of the underlying methods and theory will be of benefit to all. I describe the theories behind analysing eye movements in this way, and various methods which have been used to represent and compare them. These methods allow one to quantify the similarity between two viewing patterns, and this similarity is linked to both the image and the observer.

    The last part of the chapter describes some applications of eye movements in image viewing. The methods discussed can be applied to complex images, and therefore these experiments can tell us about perception in art and marketing, as well as about machine vision.
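    The center-surround principle behind the saliency map approach can be illustrated with a minimal sketch. This is not the Itti & Koch implementation, which builds Gaussian pyramids over colour, intensity and orientation channels; here simple box-filter blurs on a single intensity channel stand in for the pyramid, and the function names and scale pairs are illustrative. The idea shown is only the core one: regions whose fine-scale (center) response differs strongly from their coarse-scale (surround) context receive high saliency.

    ```python
    import numpy as np

    def box_blur(img, r):
        """Mean filter with radius r (edge-padded); a crude stand-in for
        the Gaussian blurs used in real saliency models."""
        if r == 0:
            return img.astype(float)
        pad = np.pad(img.astype(float), r, mode='edge')
        h, w = img.shape
        out = np.zeros((h, w), dtype=float)
        k = 2 * r + 1
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + h, dx:dx + w]
        return out / (k * k)

    def saliency_map(intensity, scale_pairs=((1, 4), (2, 8))):
        """Toy center-surround saliency: for each (center, surround) scale
        pair, take the absolute difference of the two blurred images, sum
        the resulting contrast maps, and normalise to [0, 1]."""
        sal = np.zeros_like(intensity, dtype=float)
        for c, s in scale_pairs:
            sal += np.abs(box_blur(intensity, c) - box_blur(intensity, s))
        rng = sal.max() - sal.min()
        return (sal - sal.min()) / rng if rng > 0 else sal
    ```

    A uniform image yields zero saliency everywhere, while an isolated bright patch produces a saliency peak at the patch; evaluating such a map then amounts to comparing its high-valued regions against observed fixation locations, as discussed above.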